Risk Minimization and Minimum Description for Linear Discriminant Functions

Authors

  • Haldun Aytug
  • Gary J. Koehler
  • Ling He
Abstract

Statistical learning theory provides a formal criterion for learning a concept from examples. This theory directly addresses the tradeoff between empirical fit and generalization. In practice, this leads to the structural risk minimization principle, where one minimizes a bound on the overall risk functional. For learning linear discriminant functions, this bound is impacted by the minimum of two terms – the dimension and the inverse of the margin. A popular and powerful learning mechanism, support vector machines, focuses on maximizing the margin. We look at methods that focus on minimizing the dimensionality, which, coincidentally, fulfills another useful criterion – the minimum description length principle.
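As a toy illustration (not taken from the paper), the geometric margin that support vector machines maximize can be computed directly for a fixed linear discriminant w·x + b on a separable sample; the data points and weight vector below are hypothetical:

```python
import math

# Hypothetical linearly separable sample: points with labels +1 / -1.
X = [(2.0, 2.0), (3.0, 3.0), (-2.0, -2.0), (-3.0, -1.0)]
y = [1, 1, -1, -1]

# Hypothetical separating hyperplane w.x + b = 0.
w = (1.0, 1.0)
b = 0.0

def geometric_margin(X, y, w, b):
    """Smallest signed distance y_i * (w.x_i + b) / ||w|| over the sample."""
    norm = math.sqrt(sum(wi * wi for wi in w))
    return min(
        yi * (sum(wi * xi for wi, xi in zip(w, x)) + b) / norm
        for x, yi in zip(X, y)
    )

m = geometric_margin(X, y, w, b)  # ≈ 2.828 for this sample
```

An SVM searches for the (w, b) that maximizes this quantity; the structural risk bound discussed above trades the inverse of this margin off against the dimension of the hypothesis class, and the paper's focus is on shrinking the latter.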


Related articles

Maximum margin equalizers trained with the Adatron algorithm

In this paper we apply the structural risk minimization principle as an appropriate criterion to train decision feedback and transversal equalizers. We consider both linear discriminant (optimal hyperplane) and nonlinear discriminant (support vector machine) classifiers as an alternative to the linear minimum mean-square error (MMSE) equalizer and radial basis function (RBF) networks, respective...


Sparse Uncorrelated Linear Discriminant Analysis

In this paper, we develop a novel approach for sparse uncorrelated linear discriminant analysis (ULDA). Our proposal is based on characterization of all solutions of the generalized ULDA. We incorporate sparsity into the ULDA transformation by seeking the solution with minimum ℓ1-norm from all minimum dimension solutions of the generalized ULDA. The problem is then formulated as an ℓ1-minimizati...


A NEW APPROACH TO THE SOLUTION OF SENSITIVITY MINIMIZATION IN LINEAR STATE FEEDBACK CONTROL

In this paper, it is shown that by exploiting the explicit parametric state feedback solution, it is feasible to obtain the ultimate solution to the minimum sensitivity problem. A numerical algorithm for the construction of a robust state feedback in the eigenvalue assignment problem for a controllable linear system is presented. By using a generalized parametric vector companion form, the problem of eigen...


Multi-regularization Parameters Estimation for Gaussian Mixture Classifier based on MDL Principle

Regularization addresses the problem of unstable estimation of the covariance matrix from a small sample set in a Gaussian classifier, and estimating multiple regularization parameters is more difficult than estimating a single one. In this paper, the KLIM_L covariance matrix estimate is derived theoretically from the MDL (minimum description length) principle for the small sample problem...


VC Theory of Large Margin Multi-Category Classifiers

In the context of discriminant analysis, Vapnik’s statistical learning theory has mainly been developed in three directions: the computation of dichotomies with binary-valued functions, the computation of dichotomies with real-valued functions, and the computation of polytomies with functions taking their values in finite sets, typically the set of categories itself. The case of classes of vect...



Journal:
  • INFORMS Journal on Computing

Volume 20, Issue 

Pages  -

Published 2008